A Zero-Positive Learning Approach for Diagnosing Software Performance Regressions
The field of machine programming (MP), the automation of the development of software, is making notable research advances. This is, in part, due to the emergence of a wide range of novel techniques in machine learning. In this paper, we apply MP to the automation of software performance regression testing. A performance regression is a software performance degradation caused by a code change. We demonstrate AutoPerf's generality and efficacy against 3 types of performance regressions across 10 real performance bugs in 7 benchmark and open-source programs.
Reviews: A Zero-Positive Learning Approach for Diagnosing Software Performance Regressions
STRONG POINTS/CONTRIBUTIONS 1) The false positive rates and false negative rates observed when using AutoPerf are impressively low.

NEGATIVE POINTS 1) The paper lacks a lot of technical depth and novelty… autoencoders for anomaly detection are widely used, and the problem domain (detecting performance bugs) has been studied previously as well. Knowing what was changed in the code between P_i and P_{i+1} could be very, very helpful.

DETAILED COMMENTS One comment is that I'm not sure it makes a lot of sense to train separate autoencoders for each function (or group of functions, if you are doing the k-means thing). Likely, there are going to be certain characteristics of the distributions that are shared across all functions, and I worry that you are wasting a lot of compute power by relearning everything.
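The "k-means thing" the reviewer refers to is the paper's idea of clustering functions with similar performance-counter profiles so that one autoencoder can serve a whole cluster rather than a single function. A minimal sketch of that grouping step, assuming synthetic profile vectors (the counter values and the two behavior families here are illustrative, not data from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Mean hardware performance counter (HPC) profile per function:
# rows = functions, columns = counters. Two made-up behavior families,
# e.g. memory-bound vs. compute-bound, purely for illustration.
mem_bound = np.array([8.0, 1.0, 6.0, 0.5]) + 0.1 * rng.standard_normal((5, 4))
cpu_bound = np.array([1.0, 9.0, 0.5, 7.0]) + 0.1 * rng.standard_normal((5, 4))
profiles = np.vstack([mem_bound, cpu_bound])

# Group functions with similar counter profiles; one anomaly detector
# can then be trained per cluster instead of per function.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
labels = km.labels_
print(labels)  # functions with similar profiles land in the same cluster
```

Whether the compute saved this way outweighs the loss of per-function specificity is exactly the trade-off the reviewer is questioning.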
This paper describes a system for detecting the source of performance regressions in source code. The idea is to measure hardware performance counters (HPCs) at a per-function level of the code; when a performance regression is detected, it is localized by looking for the function with the most anomalous performance counters. The anomaly detection is done by training autoencoders on the HPCs, and there is a further idea to cluster functions with similar behavior profiles to avoid the need for learning an autoencoder for every function in a large code base. This is a controversial paper because there is little methodological novelty. R1 gave the lowest score and asked whether we want to allow this kind of paper in NeurIPS, worrying that if we accept any application of ML, then NeurIPS risks becoming too broad.
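The pipeline the meta-review summarizes is zero-positive learning: train an autoencoder only on HPC profiles from the non-regressed program version, then flag runs whose counter vectors reconstruct poorly. A minimal sketch of that idea, using sklearn's `MLPRegressor` as a stand-in autoencoder; the counter values, bottleneck size, and threshold rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic HPC profiles for one function: 8 counters per run,
# clustered around a "normal" behavior vector (made-up data).
base = np.array([5.0, 2.0, 1.0, 0.5, 3.0, 0.2, 4.0, 1.5])
train = base + 0.1 * rng.standard_normal((200, 8))

# Zero-positive learning: fit on non-regressed runs only.
# The bottleneck hidden layer forces the network to learn normal behavior.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
ae.fit(train, train)

def reconstruction_error(x):
    return float(np.linalg.norm(ae.predict(x.reshape(1, -1)) - x))

# Threshold from the training distribution (mean + k*std is one common choice).
errs = np.array([reconstruction_error(x) for x in train])
threshold = errs.mean() + 3 * errs.std()

regressed_run = base + 5.0  # anomalous counter vector after a code change
print(reconstruction_error(regressed_run) > threshold)
```

Localization then amounts to running this check per function (or per cluster of functions) and reporting the worst offenders.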
Authors: Mejbah Alam, Justin Gottschlich, Nesime Tatbul, Javier S. Turek, Tim Mattson, Abdullah Muzahid
Intel Research to Solve Real-World Challenges | Intel Newsroom
What's New: This week at the annual Neural Information Processing Systems (NeurIPS) conference in Vancouver, British Columbia, Intel is contributing almost three dozen conference, workshop and spotlight papers covering deep equilibrium models, imitation learning, machine programming and more. "Intel continues to push the frontiers in fundamental and applied research as we work to infuse AI everywhere, from low-power devices to data center accelerators. This year at NeurIPS, Intel will present almost three dozen conference and workshop papers. We are fortunate to collaborate with excellent academic communities from around the world on this research, reflecting Intel's commitment to collaboratively advance machine learning." Research topics span the breadth of artificial intelligence (AI), from fundamental understanding of neural networks to applying machine learning to software programming to particle physics.